3/07/2020

Our project

We decided to focus on central Colombia, mainly because it contains the capital.

We built a model for the number of confirmed cases using all the other covariates (plus some we created), and we estimated the predictive accuracy of our selected model.

Loading the dataset

Our departments

We decided to consider the following departments as central Colombia:

  • Bogotá D.C.,
  • Boyacá,
  • Tolima,
  • Cundinamarca,
  • Meta,
  • Quindío,
  • Valle del Cauca,
  • Risaralda,
  • Caldas,
  • Antioquia,
  • Santander,
  • Casanare.

Description of variables

ID de caso: ID of the confirmed case.

Fecha de diagnóstico: Date in which the disease was diagnosed.

Ciudad de ubicación: City where the case was diagnosed.

Departamento o Distrito: Department or district the city belongs to.

Atención: Situation of the patient: recovered, at home, in hospital, in the ICU, or deceased.

Edad: Age of the confirmed case.

Sexo: Sex of the confirmed case.

Tipo: How the person got infected: in Colombia, abroad or unknown.

País de procedencia: Country of origin if the person got infected abroad.

Map

Here we can see our selected cities. The color of each pin reflects the number of cases: fewer than \(10\) cases is “green”, fewer than \(100\) is “orange”, otherwise it is “red”.
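The pin-coloring rule can be sketched with leaflet. This is a minimal illustration only: the data frame `city_cases` and its columns are hypothetical stand-ins for the report's actual map data.

```r
library(leaflet)

# Hypothetical summary: one row per city with coordinates and case counts
city_cases <- data.frame(
  city  = c("Bogotá", "Medellín", "Ibagué"),
  lat   = c(4.711, 6.244, 4.439),
  lng   = c(-74.072, -75.581, -75.232),
  cases = c(350, 80, 5)
)

# Same thresholds as in the text: fewer than 10 green, fewer than 100 orange, else red
pin_color <- function(n) ifelse(n < 10, "green", ifelse(n < 100, "orange", "red"))

leaflet(city_cases) %>%
  addTiles() %>%
  addAwesomeMarkers(
    lng = ~lng, lat = ~lat, label = ~city,
    icon = awesomeIcons(markerColor = pin_color(city_cases$cases))
  )
```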

Preprocessing

We had to clean the dataset:

  • We transformed the Fecha de diagnóstico variable into a Date variable,

  • we fixed the variable Id de caso (some rows were missing so the numbers weren’t consecutive),

  • we created a variable Grupo de edad,

  • we transformed the variables Grupo de edad, Departamento o Distrito, Ciudad de ubicación, Sexo, Atención, Tipo into factors,

  • we cleaned the column País de procedencia and created the variable Continente de procedencia.
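The cleaning steps above could be implemented along these lines. This is a hedged sketch: the date format, the age breaks, and the use of `dplyr::across` (dplyr ≥ 1.0) are assumptions, not the report's actual code.

```r
library(dplyr)

colombia_covid <- colombia_covid %>%
  # parse the diagnosis date (dd/mm/yyyy format assumed)
  mutate(`Fecha de diagnóstico` = as.Date(`Fecha de diagnóstico`, format = "%d/%m/%Y")) %>%
  # re-number the cases so IDs are consecutive after dropping rows
  mutate(`ID de caso` = row_number()) %>%
  # bin ages into the groups used throughout the report (break points assumed)
  mutate(`Grupo de edad` = cut(Edad,
                               breaks = c(0, 18, 30, 45, 60, 75, Inf),
                               labels = c("0_18", "19_30", "31_45", "46_60", "60_75", "76+"),
                               include.lowest = TRUE)) %>%
  # encode the categorical covariates as factors
  mutate(across(c(`Grupo de edad`, `Departamento o Distrito`,
                  `Ciudad de ubicación`, Sexo, Atención, Tipo), as.factor))
```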

##   ID de caso Fecha de diagnóstico Ciudad de ubicación Departamento o Distrito
## 1          1           2020-03-06              Bogotá             Bogotá D.C.
## 2          2           2020-03-09                Buga         Valle del Cauca
## 3          3           2020-03-09            Medellín               Antioquia
## 4          4           2020-03-11            Medellín               Antioquia
## 5          5           2020-03-11            Medellín               Antioquia
## 6          6           2020-03-11              Itagüí               Antioquia
##     Atención Edad Sexo        Tipo País de procedencia Grupo de edad
## 1 Recuperado   19    F   Importado              Italia         19_30
## 2 Recuperado   34    M   Importado              España         31_45
## 3 Recuperado   50    F   Importado              España         46_60
## 4 Recuperado   55    M Relacionado            Colombia         46_60
## 5 Recuperado   25    M Relacionado            Colombia         19_30
## 6       Casa   27    F Relacionado            Colombia         19_30
##   Continente de procedencia
## 1                    Europa
## 2                    Europa
## 3                    Europa
## 4                  Colombia
## 5                  Colombia
## 6                  Colombia

New datasets

New datasets, part 2

Exploring the dataset

Number of cases confirmed day by day

Other plots

Here the growth seems exponential (which is consistent with the fact that we are studying the early stages of the outbreak).
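One quick way to check the exponential claim: if growth is exponential, the log of the cumulative count should be roughly linear in time. A sketch, assuming the column names used elsewhere in the report:

```r
# Build a daily series from the diagnosis dates, then fit log(cumulative) ~ time
daily <- as.data.frame(table(colombia_covid$`Fecha de diagnóstico`))
names(daily) <- c("date", "new_cases")
daily$elapsed <- seq_len(nrow(daily)) - 1
daily$cumulative <- cumsum(daily$new_cases)

fit <- lm(log(cumulative) ~ elapsed, data = daily)
summary(fit)$r.squared  # an R^2 close to 1 supports exponential growth
```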

brks <- seq(-250, 250, 50)
# label both sides of the pyramid with absolute counts
lbls <- as.character(abs(brks))

ggplot(data=colombia_covid, aes(x=`Departamento o Distrito`, fill = Sexo)) +  
                              geom_bar(data = subset(colombia_covid, Sexo == "F")) +
                              geom_bar(data = subset(colombia_covid, Sexo == "M"), aes(y=..count..*(-1))) + 
                              scale_y_continuous(breaks = brks,
                                               labels = lbls) + 
                              coord_flip() +  
                              labs(title="Spread of the disease across genders",
                                   y = "Number of cases",
                                   x = "Department",
                                   fill = "Gender") +
                              theme_tufte() +  
                              theme(plot.title = element_text(hjust = .5), 
                                    axis.ticks = element_blank()) +   
                              scale_fill_brewer(palette = "Dark2")  

#compute percentage so that we can label more precisely the pie chart
age_groups_pie <- colombia_covid %>% 
  group_by(`Grupo de edad`) %>%
  count() %>%
  ungroup() %>%
  mutate(per=`n`/sum(`n`)) %>% 
  arrange(desc(`Grupo de edad`))
age_groups_pie$label <- scales::percent(age_groups_pie$per)

age_pie <- ggplot(age_groups_pie, aes(x = "", y = per, fill = factor(`Grupo de edad`))) + 
  geom_bar(stat="identity", width = 1) +
  theme(axis.line = element_blank(), 
        plot.title = element_text(hjust=0.5)) + 
  labs(fill="Age groups", 
       x=NULL, 
       y=NULL, 
       title="Distribution of the disease across ages") +
  coord_polar(theta = "y") +
  #geom_text(aes(x=1, y = cumsum(per) - per/2, label=label))
  geom_label_repel(aes(x=1, y=cumsum(per) - per/2, label=label), size=3, show.legend = F, nudge_x = 0) +
  guides(fill = guide_legend(title = "Group"))
  
age_pie 

Age-Sex plot

theme_set(theme_classic())

ggplot(colombia_covid, aes(x = `Fecha de diagnóstico`)) +
  scale_fill_brewer(palette = "Set3") +
  geom_bar(aes(fill=Tipo), width = 0.8) +
  theme(axis.text.x = element_text(angle=65, vjust=0.6)) +
  labs(title = "Daily number of confirmed cases", 
       subtitle = "subdivided across type",
       x = "Date of confirmation",
       fill = "Type")

Tipo

I think that en estudio means that it is not yet clear whether the case is imported or not. However, it seems like there are more imported cases; we can count them:

type_pie <- colombia_covid %>% 
  group_by(Tipo) %>%
  count() %>%
  ungroup() %>%
  mutate(per=`n`/sum(`n`)) %>% 
  arrange(desc(Tipo))
type_pie$label <- scales::percent(type_pie$per)
type_pie<-type_pie[names(type_pie)!="per"]
colnames(type_pie)<-c("Tipo", "Total number", "Percentage")
type_pie
## # A tibble: 3 x 3
##   Tipo        `Total number` Percentage
##   <fct>                <int> <chr>     
## 1 Relacionado            291 29.3%     
## 2 Importado              467 47.0%     
## 3 En estudio             235 23.7%

The frequentist approach

Poisson with Elapsed time as predictor

## [1] "Estimated overdispersion 11.0591714112992"
## [1] "AIC: 446.827929331767"
## [1] "Null deviance:  7859.81"   "Residual deviance: 280.62"
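The diagnostics above were presumably produced by a fit along these lines, mirroring the pattern of the poisson2/poisson3 chunks below (the data frame `data1` and the residual degrees of freedom n - k = 25 - 2 = 23 are inferred from those chunks):

```r
# Baseline Poisson fit with elapsed time as the only predictor
poisson1 <- glm(`Cumulative cases` ~ `Elapsed time`, data=data1, family=poisson)

# Pearson-residual estimate of overdispersion
pred.pois1 <- poisson1$fitted.values
res.st1 <- (data1$`Cumulative cases` - pred.pois1)/sqrt(pred.pois1)
#n=25
#k=2
#n-k=23
print(paste("Estimated overdispersion", sum(res.st1^2)/23))

paste("AIC:", poisson1$aic)
paste(c("Null deviance: ", "Residual deviance:"),
       round(c(poisson1$null.deviance, deviance(poisson1)), 2))
```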

Angela’s attempt

## 
## Call:
## glm(formula = `Cumulative cases/Department` ~ `Elapsed time`, 
##     family = poisson, data = cases_relev_dep)
## 
## Deviance Residuals: 
##      Min        1Q    Median        3Q       Max  
## -13.9994   -4.7902   -2.1236    0.8586   16.3553  
## 
## Coefficients:
##                Estimate Std. Error z value Pr(>|z|)    
## (Intercept)    0.982659   0.062709   15.67   <2e-16 ***
## `Elapsed time` 0.167306   0.002779   60.20   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 8732.6  on 82  degrees of freedom
## Residual deviance: 3855.1  on 81  degrees of freedom
## AIC: 4285.4
## 
## Number of Fisher Scoring iterations: 5

New

## [1] "AIC: 317.731647373115"
## [1] "Null deviance:  870.39"    "Residual deviance: 191.78"

Poisson with time plus gender

poisson2 <- glm(`Cumulative cases` ~ `Elapsed time` + Sexo_M, data=data1, family=poisson)
par(mfrow=c(2,2))
plot(poisson2)

paste("AIC:", poisson2$aic)
## [1] "AIC: 448.106201285854"
paste(c("Null deviance: ", "Residual deviance:"),
       round(c(poisson2$null.deviance, deviance(poisson2)), 2))
## [1] "Null deviance:  7859.81"  "Residual deviance: 279.9"

Poisson with Elapsed time plus Grupo de edad

poisson3 <- glm(`Cumulative cases` ~ `Elapsed time` + `Grupo de edad_19_30` + `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` + `Grupo de edad_76+`, data=data1, family=poisson)
par(mfrow=c(2,2))
plot(poisson3)

pred.pois3 <- poisson3$fitted.values
res.st3 <- (data1$`Cumulative cases` - pred.pois3)/sqrt(pred.pois3)
#n=25 
#k=7
#n-k=18
print(paste("Estimated overdispersion", est.overdispersion <- sum(res.st3^2)/18))
## [1] "Estimated overdispersion 10.001532070592"
paste("AIC:", poisson3$aic)
## [1] "AIC: 376.950305422531"
paste(c("Null deviance: ", "Residual deviance:"),
       round(c(poisson3$null.deviance, deviance(poisson3)), 2))
## [1] "Null deviance:  7859.81"   "Residual deviance: 200.75"

Poisson with Elapsed time, Age and Department as predictors

#Running poisson with the variable representing time, age and departments as predictors
poisson4 <- glm(`Cumulative cases` ~ `Elapsed time` + `Grupo de edad_19_30` + `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` + `Grupo de edad_76+` + `Departamento o Distrito_Bogotá D.C.` + `Departamento o Distrito_Boyacá` + `Departamento o Distrito_Caldas` + `Departamento o Distrito_Casanare` + `Departamento o Distrito_Cauca` + `Departamento o Distrito_Cundinamarca` + `Departamento o Distrito_Meta` + `Departamento o Distrito_Quindío` + `Departamento o Distrito_Risaralda` + `Departamento o Distrito_Santander` + `Departamento o Distrito_Tolima` + `Departamento o Distrito_Valle del Cauca`, data=data1, family=poisson)
par(mfrow=c(2,2))
plot(poisson4)

pred.pois4 <- poisson4$fitted.values
res.st4 <- (data1$`Cumulative cases` - pred.pois4)/sqrt(pred.pois4)
#n=25 
#k=19
#n-k=6
print(paste("Estimated overdispersion", est.overdispersion <- sum(res.st4^2)/6))
## [1] "Estimated overdispersion 0.516840130836562"
paste("AIC:", poisson4$aic)
## [1] "AIC: 203.506208494973"
paste(c("Null deviance: ", "Residual deviance:"),
       round(c(poisson4$null.deviance, deviance(poisson4)), 2))
## [1] "Null deviance:  7859.81" "Residual deviance: 3.3"

Poisson with Elapsed time, Age, Departments and Continent of origin as predictors

#poisson5 <- glm(`Cumulative cases` ~ `Elapsed time` + `Grupo de edad_19_30` + `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` + `Grupo de edad_76+` + `Departamento o Distrito_Bogotá D.C.`+`Departamento o Distrito_Boyacá`+`Departamento o Distrito_Caldas` + `Departamento o Distrito_Casanare`+`Departamento o Distrito_Cauca`+`Departamento o Distrito_Cundinamarca`+`Departamento o Distrito_Meta`+`Departamento o Distrito_Quindío`+`Departamento o Distrito_Risaralda`+`Departamento o Distrito_Santander`+`Departamento o Distrito_Tolima`+`Departamento o Distrito_Valle del Cauca`+`Continente de procedencia_Asia`+`Continente de procedencia_Centroamérica`+`Continente de procedencia_Colombia`+`Continente de procedencia_Europa`+`Continente de procedencia_Norteamérica` + `Continente de procedencia_Sudamerica`, data=data1, family=poisson)
#par(mfrow=c(2,2))
#plot(poisson5)
#paste("AIC:", poisson5$aic)
#paste(c("Null deviance: ", "Residual deviance:"),
#       round(c(poisson5$null.deviance, deviance(poisson5)), 2))

ANOVA to compare the Poisson models

#anova(poisson1, poisson3, poisson4, poisson5, test="Chisq")
anova(poisson1, poisson3, poisson4, test="Chisq")
## Analysis of Deviance Table
## 
## Model 1: `Cumulative cases` ~ `Elapsed time`
## Model 2: `Cumulative cases` ~ `Elapsed time` + `Grupo de edad_19_30` + 
##     `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` + 
##     `Grupo de edad_76+`
## Model 3: `Cumulative cases` ~ `Elapsed time` + `Grupo de edad_19_30` + 
##     `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` + 
##     `Grupo de edad_76+` + `Departamento o Distrito_Bogotá D.C.` + 
##     `Departamento o Distrito_Boyacá` + `Departamento o Distrito_Caldas` + 
##     `Departamento o Distrito_Casanare` + `Departamento o Distrito_Cauca` + 
##     `Departamento o Distrito_Cundinamarca` + `Departamento o Distrito_Meta` + 
##     `Departamento o Distrito_Quindío` + `Departamento o Distrito_Risaralda` + 
##     `Departamento o Distrito_Santander` + `Departamento o Distrito_Tolima` + 
##     `Departamento o Distrito_Valle del Cauca`
##   Resid. Df Resid. Dev Df Deviance  Pr(>Chi)    
## 1        23    280.624                          
## 2        18    200.747  5   79.878 8.901e-16 ***
## 3         6      3.303 12  197.444 < 2.2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Quasi Poisson with Elapsed time as predictor

poisson1quasi <- glm(`Cumulative cases` ~ `Elapsed time`, data=data1, family=quasipoisson)
par(mfrow=c(2,2))
plot(poisson1quasi)

pred.poisq <- poisson1quasi$fitted.values
res.stq <- (data1$`Cumulative cases` - pred.poisq)/sqrt(summary(poisson1quasi)$dispersion*pred.poisq)
print(paste("Estimated overdispersion", sum(res.stq^2)/23))
## [1] "Estimated overdispersion 0.999988649853625"
paste("AIC:", poisson1quasi$aic)
## [1] "AIC: NA"
paste(c("Null deviance: ", "Residual deviance:"),
       round(c(poisson1quasi$null.deviance, deviance(poisson1quasi)), 2))
## [1] "Null deviance:  7859.81"   "Residual deviance: 280.62"

Quasi Poisson with Elapsed time and Age as predictor

#Let's apply a quasi poisson and see what happens
poisson2quasi <- glm(`Cumulative cases` ~ `Elapsed time` + `Grupo de edad_19_30` + `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` + `Grupo de edad_76+`, data=data1, family=quasipoisson)
par(mfrow=c(2,2))
plot(poisson2quasi)

pred.poisq2 <- poisson2quasi$fitted.values
res.stq2 <- (data1$`Cumulative cases` - pred.poisq2)/sqrt(summary(poisson2quasi)$dispersion*pred.poisq2)
print(paste("Estimated overdispersion", sum(res.stq2^2)/18))
## [1] "Estimated overdispersion 0.999984520837828"
paste("AIC:", poisson2quasi$aic)
## [1] "AIC: NA"
paste(c("Null deviance: ", "Residual deviance:"),
       round(c(poisson2quasi$null.deviance, deviance(poisson2quasi)), 2))
## [1] "Null deviance:  7859.81"   "Residual deviance: 200.75"

Negative binomial with Elapsed time as predictor

#Running negative binomial with just the variable representing the time as predictor
nb1 <- glm.nb(`Cumulative cases` ~ `Elapsed time`, data=data1)
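Beyond the bare fit, it helps to look at the estimated dispersion. A sketch of the diagnostics, using the `nb1` object above (`glm.nb` comes from MASS):

```r
library(MASS)  # provides glm.nb

# theta is the negative binomial size parameter: smaller theta = more overdispersion
nb1$theta
paste("AIC:", round(AIC(nb1), 2))
paste(c("Null deviance: ", "Residual deviance:"),
       round(c(nb1$null.deviance, deviance(nb1)), 2))
```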

The Bayesian approach

Poisson regression

As a first attempt, we fit a simple Poisson regression:

\[ \ln\lambda_i = \alpha + \beta\cdot \mathit{elapsed\_time}_i \\ y_i \sim \text{Poisson}(\lambda_i) \]

with \(i = 1,\dots,83\), where \(y_i\) is the number of cases.
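The Stan program itself is not shown in the report; a minimal version consistent with the reported parameters (`alpha`, `beta`) and the posterior-predictive draws `y_rep` used below could look as follows. The data field names and the flat priors are assumptions:

```r
library(rstan)

# Sketch of a Poisson regression in Stan with a log link; poisson_log takes
# the linear predictor directly on the log scale.
poisson_code <- "
data {
  int<lower=1> N;
  vector[N] elapsed_time;
  int<lower=0> cases[N];
}
parameters {
  real alpha;
  real beta;
}
model {
  cases ~ poisson_log(alpha + beta * elapsed_time);
}
generated quantities {
  int y_rep[N];
  for (i in 1:N)
    y_rep[i] = poisson_log_rng(alpha + beta * elapsed_time[i]);
}
"

fit.model.Poisson <- stan(model_code = poisson_code,
                          data = list(N = nrow(model.data),
                                      elapsed_time = model.data$elapsed_time,
                                      cases = model.data$cases),
                          chains = 4, iter = 2000)
```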

## 
## SAMPLING FOR MODEL 'poisson_regression' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 3e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.3 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
## Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
## Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
## Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
## Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
## Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
## Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
## Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
## Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
## Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
## Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
## Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.169168 seconds (Warm-up)
## Chain 1:                0.135105 seconds (Sampling)
## Chain 1:                0.304273 seconds (Total)
## Chain 1: 
## 
## SAMPLING FOR MODEL 'poisson_regression' NOW (CHAIN 2).
## Chain 2: 
## Chain 2: Gradient evaluation took 1.3e-05 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.13 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2: 
## Chain 2: 
## Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
## Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
## Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
## Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
## Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
## Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
## Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
## Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
## Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
## Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
## Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
## Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
## Chain 2: 
## Chain 2:  Elapsed Time: 0.207742 seconds (Warm-up)
## Chain 2:                0.138894 seconds (Sampling)
## Chain 2:                0.346636 seconds (Total)
## Chain 2: 
## 
## SAMPLING FOR MODEL 'poisson_regression' NOW (CHAIN 3).
## Chain 3: 
## Chain 3: Gradient evaluation took 1.5e-05 seconds
## Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.15 seconds.
## Chain 3: Adjust your expectations accordingly!
## Chain 3: 
## Chain 3: 
## Chain 3: Iteration:    1 / 2000 [  0%]  (Warmup)
## Chain 3: Iteration:  200 / 2000 [ 10%]  (Warmup)
## Chain 3: Iteration:  400 / 2000 [ 20%]  (Warmup)
## Chain 3: Iteration:  600 / 2000 [ 30%]  (Warmup)
## Chain 3: Iteration:  800 / 2000 [ 40%]  (Warmup)
## Chain 3: Iteration: 1000 / 2000 [ 50%]  (Warmup)
## Chain 3: Iteration: 1001 / 2000 [ 50%]  (Sampling)
## Chain 3: Iteration: 1200 / 2000 [ 60%]  (Sampling)
## Chain 3: Iteration: 1400 / 2000 [ 70%]  (Sampling)
## Chain 3: Iteration: 1600 / 2000 [ 80%]  (Sampling)
## Chain 3: Iteration: 1800 / 2000 [ 90%]  (Sampling)
## Chain 3: Iteration: 2000 / 2000 [100%]  (Sampling)
## Chain 3: 
## Chain 3:  Elapsed Time: 0.156327 seconds (Warm-up)
## Chain 3:                0.138992 seconds (Sampling)
## Chain 3:                0.295319 seconds (Total)
## Chain 3: 
## 
## SAMPLING FOR MODEL 'poisson_regression' NOW (CHAIN 4).
## Chain 4: 
## Chain 4: Gradient evaluation took 2e-05 seconds
## Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.2 seconds.
## Chain 4: Adjust your expectations accordingly!
## Chain 4: 
## Chain 4: 
## Chain 4: Iteration:    1 / 2000 [  0%]  (Warmup)
## Chain 4: Iteration:  200 / 2000 [ 10%]  (Warmup)
## Chain 4: Iteration:  400 / 2000 [ 20%]  (Warmup)
## Chain 4: Iteration:  600 / 2000 [ 30%]  (Warmup)
## Chain 4: Iteration:  800 / 2000 [ 40%]  (Warmup)
## Chain 4: Iteration: 1000 / 2000 [ 50%]  (Warmup)
## Chain 4: Iteration: 1001 / 2000 [ 50%]  (Sampling)
## Chain 4: Iteration: 1200 / 2000 [ 60%]  (Sampling)
## Chain 4: Iteration: 1400 / 2000 [ 70%]  (Sampling)
## Chain 4: Iteration: 1600 / 2000 [ 80%]  (Sampling)
## Chain 4: Iteration: 1800 / 2000 [ 90%]  (Sampling)
## Chain 4: Iteration: 2000 / 2000 [100%]  (Sampling)
## Chain 4: 
## Chain 4:  Elapsed Time: 0.199453 seconds (Warm-up)
## Chain 4:                0.122217 seconds (Sampling)
## Chain 4:                0.32167 seconds (Total)
## Chain 4:
## Inference for Stan model: poisson_regression.
## 4 chains, each with iter=2000; warmup=1000; thin=1; 
## post-warmup draws per chain=1000, total post-warmup draws=4000.
## 
##       mean se_mean   sd  2.5%  25%  50%  75% 97.5% n_eff Rhat
## alpha 0.18       0 0.13 -0.07 0.10 0.18 0.26  0.43   897 1.01
## beta  0.12       0 0.01  0.11 0.11 0.12 0.12  0.13   893 1.00
## 
## Samples were drawn using NUTS(diag_e) at Thu Jul  2 22:23:27 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).

Looking at Rhat, we can see that the chains have converged.

theme_set(bayesplot::theme_default())

mcmc_scatter(as.matrix(fit.model.Poisson, pars=c("alpha", "beta") ), alpha=0.2)

Check the posterior:

y_rep <- as.matrix(fit.model.Poisson, pars="y_rep")
ppc_dens_overlay(y = model.data$cases, y_rep[1:200,]) 

The model is not able to capture low and high numbers of new cases.

The fit is not satisfactory; this is probably due to overdispersion. We can check the residuals to confirm this hypothesis:

#in this way we check the standardized residuals
mean_y_rep<-colMeans(y_rep)
std_residual<-(model.data$cases - mean_y_rep) / sqrt(mean_y_rep)
qplot(mean_y_rep, std_residual) + hline_at(2) + hline_at(-2)

The variance of the residuals increases as the predicted values increase. The standardized residuals should have mean 0 and standard deviation 1 (hence the lines at +2 and -2 indicate approximate 95% error bounds). The variance of the standardized residuals is much greater than 1, indicating a large amount of overdispersion.

Classically, the problem of overdispersed count data is addressed by replacing the Poisson model with a negative binomial one.

Angela stan

## 
## SAMPLING FOR MODEL 'poisson_regression' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 1.3e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.13 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
## Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
## Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
## Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
## Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
## Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
## Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
## Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
## Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
## Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
## Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
## Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.085021 seconds (Warm-up)
## Chain 1:                0.060986 seconds (Sampling)
## Chain 1:                0.146007 seconds (Total)
## Chain 1: 
## 
## SAMPLING FOR MODEL 'poisson_regression' NOW (CHAIN 2).
## Chain 2: 
## Chain 2: Gradient evaluation took 7e-06 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.07 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2: 
## Chain 2: 
## Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
## Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
## Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
## Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
## Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
## Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
## Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
## Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
## Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
## Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
## Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
## Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
## Chain 2: 
## Chain 2:  Elapsed Time: 0.071645 seconds (Warm-up)
## Chain 2:                0.06603 seconds (Sampling)
## Chain 2:                0.137675 seconds (Total)
## Chain 2: 
## 
## SAMPLING FOR MODEL 'poisson_regression' NOW (CHAIN 3).
## Chain 3: 
## Chain 3: Gradient evaluation took 9e-06 seconds
## Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds.
## Chain 3: Adjust your expectations accordingly!
## Chain 3: 
## Chain 3: 
## Chain 3: Iteration:    1 / 2000 [  0%]  (Warmup)
## Chain 3: Iteration:  200 / 2000 [ 10%]  (Warmup)
## Chain 3: Iteration:  400 / 2000 [ 20%]  (Warmup)
## Chain 3: Iteration:  600 / 2000 [ 30%]  (Warmup)
## Chain 3: Iteration:  800 / 2000 [ 40%]  (Warmup)
## Chain 3: Iteration: 1000 / 2000 [ 50%]  (Warmup)
## Chain 3: Iteration: 1001 / 2000 [ 50%]  (Sampling)
## Chain 3: Iteration: 1200 / 2000 [ 60%]  (Sampling)
## Chain 3: Iteration: 1400 / 2000 [ 70%]  (Sampling)
## Chain 3: Iteration: 1600 / 2000 [ 80%]  (Sampling)
## Chain 3: Iteration: 1800 / 2000 [ 90%]  (Sampling)
## Chain 3: Iteration: 2000 / 2000 [100%]  (Sampling)
## Chain 3: 
## Chain 3:  Elapsed Time: 0.123743 seconds (Warm-up)
## Chain 3:                0.065924 seconds (Sampling)
## Chain 3:                0.189667 seconds (Total)
## Chain 3: 
## 
## SAMPLING FOR MODEL 'poisson_regression' NOW (CHAIN 4).
## Chain 4: 
## Chain 4: Gradient evaluation took 8e-06 seconds
## Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.08 seconds.
## Chain 4: Adjust your expectations accordingly!
## Chain 4: 
## Chain 4: 
## Chain 4: Iteration:    1 / 2000 [  0%]  (Warmup)
## Chain 4: Iteration:  200 / 2000 [ 10%]  (Warmup)
## Chain 4: Iteration:  400 / 2000 [ 20%]  (Warmup)
## Chain 4: Iteration:  600 / 2000 [ 30%]  (Warmup)
## Chain 4: Iteration:  800 / 2000 [ 40%]  (Warmup)
## Chain 4: Iteration: 1000 / 2000 [ 50%]  (Warmup)
## Chain 4: Iteration: 1001 / 2000 [ 50%]  (Sampling)
## Chain 4: Iteration: 1200 / 2000 [ 60%]  (Sampling)
## Chain 4: Iteration: 1400 / 2000 [ 70%]  (Sampling)
## Chain 4: Iteration: 1600 / 2000 [ 80%]  (Sampling)
## Chain 4: Iteration: 1800 / 2000 [ 90%]  (Sampling)
## Chain 4: Iteration: 2000 / 2000 [100%]  (Sampling)
## Chain 4: 
## Chain 4:  Elapsed Time: 0.130585 seconds (Warm-up)
## Chain 4:                0.085266 seconds (Sampling)
## Chain 4:                0.215851 seconds (Total)
## Chain 4:
## Inference for Stan model: poisson_regression.
## 4 chains, each with iter=2000; warmup=1000; thin=1; 
## post-warmup draws per chain=1000, total post-warmup draws=4000.
## 
##       mean se_mean   sd 2.5%  25%  50%  75% 97.5% n_eff Rhat
## alpha 2.46       0 0.05 2.35 2.42 2.46 2.49  2.57   600 1.01
## beta  0.17       0 0.00 0.17 0.17 0.17 0.17  0.17   610 1.01
## 
## Samples were drawn using NUTS(diag_e) at Thu Jul  2 22:23:37 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).

Negative binomial model
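A negative binomial counterpart to the Poisson regression keeps the same log-linear predictor and adds a dispersion parameter. This is a sketch only: the data field names follow the Poisson sketch conventions above and are not taken from the report's Stan file.

```r
library(rstan)

# Negative binomial regression: same linear predictor as the Poisson model,
# plus a dispersion parameter phi, so that Var(y) = mu + mu^2 / phi.
negbin_code <- "
data {
  int<lower=1> N;
  vector[N] elapsed_time;
  int<lower=0> cases[N];
}
parameters {
  real alpha;
  real beta;
  real<lower=0> phi;   // dispersion: larger phi means closer to Poisson
}
model {
  cases ~ neg_binomial_2_log(alpha + beta * elapsed_time, phi);
}
generated quantities {
  int y_rep[N];
  for (i in 1:N)
    y_rep[i] = neg_binomial_2_log_rng(alpha + beta * elapsed_time[i], phi);
}
"
```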